Results 1 - 20 of 34
1.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 2067-2071, 2023.
Article in English | Scopus | ID: covidwho-20243456

ABSTRACT

In today's computer systems, the mouse is an essential input device. Touch interfaces are high-contact surfaces that we use regularly and frequently throughout the day, so the input device becomes contaminated with bacteria and pathogens. Although wireless mice have eliminated tangled wires, the user still has to touch the device. In light of the epidemic, the proposed method employs an external webcam or a built-in image sensor to capture arm gestures and detect fingertips, allowing users to execute standard mouse actions such as left click and scrolling. The algorithm is trained using machine learning on the image-sensor data, and the fingers are identified efficiently. Removing the reliance on physical devices to control the computer eliminates a contact-based human-machine interface, and the suggested approach can thus help prevent the spread of Covid-19. © 2023 IEEE.
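
A minimal illustrative sketch (not the authors' code) of the kind of fingertip-driven cursor control the abstract describes, assuming a MediaPipe hand-landmark detector and the pyautogui screen-control library; the pinch-click threshold and margins are arbitrary assumptions:

```python
import cv2
import mediapipe as mp
import pyautogui

mp_hands = mp.solutions.hands
screen_w, screen_h = pyautogui.size()

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)                    # mirror for natural control
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            tip = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            thumb = lm[mp_hands.HandLandmark.THUMB_TIP]
            # landmark coordinates are normalized to [0, 1]; map to screen pixels,
            # keeping a margin so pyautogui's corner fail-safe is not triggered
            x = min(max(tip.x, 0.02), 0.98) * screen_w
            y = min(max(tip.y, 0.02), 0.98) * screen_h
            pyautogui.moveTo(int(x), int(y))
            # a thumb-index pinch is treated as a left click (threshold is arbitrary)
            if abs(tip.x - thumb.x) < 0.03 and abs(tip.y - thumb.y) < 0.03:
                pyautogui.click()
        cv2.imshow("virtual mouse", frame)
        if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```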

2.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 220-225, 2023.
Article in English | Scopus | ID: covidwho-20232798

ABSTRACT

The whole world has been confronting COVID-19 since March 2020. With its rapid spread, it has devastated much of the world and proved to be one of the most dangerous viruses of the 21st century. Countries went into lockdown to control the spread of the virus, and economies dropped to all-time lows. A major guideline for avoiding the spread of diseases like COVID-19 at work is to avoid contact with people and their belongings, so sharing computing devices is unsafe because touching them may spread the virus. This paper presents an Artificial Intelligence-based virtual mouse that recognizes hand gestures to control various functions of a personal computer. The virtual mouse algorithm uses a webcam or the built-in camera of the system to capture hand gestures, then detects the palm boundaries in a manner similar to the face detection model of the MediaPipe face mesh algorithm. After tracing the palm boundaries, a regression model locates the 21 3D hand-knuckle coordinate points inside the recognized hand/palm region. Once the hand landmarks are detected, they are used to call Windows Application Programming Interface (API) functions to control system functionality. The proposed algorithm was tested for volume control and cursor control on a laptop running the Windows operating system with a webcam. The proposed system took only 1 ms to identify a gesture and control the volume and cursor in real time. © 2023 IEEE.
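
As a complement to the cursor sketch above, a small self-contained sketch (again not the paper's implementation) of how the pinch distance between two MediaPipe fingertip landmarks could be mapped to a volume level before calling a Windows audio API; the distance range and the example landmark coordinates are illustrative assumptions:

```python
import math

def pinch_to_volume(thumb_tip, index_tip, min_d=0.02, max_d=0.25):
    """Map the normalized distance between thumb and index fingertips
    (MediaPipe hand landmarks, coordinates in [0, 1]) to a 0-100 volume level."""
    d = math.dist(thumb_tip, index_tip)
    d = max(min_d, min(d, max_d))                      # clamp to the usable range
    return round((d - min_d) / (max_d - min_d) * 100)

# Example with made-up landmark coordinates (x, y): a wider pinch -> higher volume.
print(pinch_to_volume((0.42, 0.55), (0.50, 0.40)))
# On Windows, the returned level could then be handed to the mixer, for example
# via pycaw's IAudioEndpointVolume.SetMasterVolumeLevelScalar(level / 100, None).
```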

3.
Interactive Learning Environments ; 2023.
Article in English | Web of Science | ID: covidwho-20231404

ABSTRACT

Technological advances and COVID-19 have led to expedited technology use and online learning in higher education. Increased technology use and online learning have led individuals to either adapt or experience technostress. Higher education is a ripe context for technostress to occur, especially for students, since many courses are being offered in a hybrid and/or synchronous online format due to COVID-19. Students have often been required and/or encouraged to use multiple technologies, especially webcams, during online courses. Thus, this study explores the technostress students may experience from requested webcam use, as well as potential influencers and outcomes of that technostress, by examining factors from Davis's [Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340] technology acceptance model within a newly proposed model. Results indicated that the model significantly predicted digital skills, perceived ease of use, technostress, and cognitive learning for students both required and not required to use webcams. Implications for researchers and instructors, as well as future research directions, are discussed.

4.
Applied Computational Intelligence and Soft Computing ; 2023, 2023.
Article in English | ProQuest Central | ID: covidwho-2315840

ABSTRACT

Covid-19 has been a life-changer in the sphere of online education. With complete lockdowns in various countries, there has been a tumultuous increase in the need for online education, and it has therefore become mandatory for examiners to ensure that a fair evaluation methodology is followed and that academic integrity is maintained. A plethora of literature is available on methods to mitigate cheating during online examinations. Our article follows a systematic literature review (SLR) approach and highlights the research gap concerning the use of soft computing techniques to combat cheating during online examinations. We also present the state-of-the-art methods capable of mitigating online cheating, namely face recognition, facial expression recognition, head posture analysis, eye gaze tracking, network data traffic analysis, and detection of IP spoofing. A discussion on improving existing online cheating detection systems is also presented.

5.
Íkala ; 27(2):292-311, 2022.
Article in English | ProQuest Central | ID: covidwho-2292848

ABSTRACT

As a result of the pandemic generated by covid-19, educational institutions began to teach their classes remotely through videoconferencing systems, during which the majority of students opted to keep the webcam turned off, causing demotivation and uncertainty among teachers. The studies that investigate this behavior present atomized and unconnected reasons, which is why this research aimed to offer a holistic view of the reasons that led students to activate the camera or not during the pandemic. Based on data obtained through a questionnaire administered to 305 students from different Spanish universities, this explanatory study identifies social, personal, economic, and academic dimensions that, in combination, appear to reveal those reasons. The data revealed a tendency to follow the decision of the majority regarding whether or not to connect the camera during classes, as well as a reluctance to appear before classmates in relaxed settings. Although no significant differences were found between men and women regarding frequency of use, male participants attributed less importance both to the projected personal image and to the connection resources available. Results also show that the importance attributed both to the generation of social presence in the classroom and to the academic qualification is a predictor of the frequency of webcam use. These results suggest the use of non-imposed strategies that encourage higher education students to connect the webcam during teaching delivered through videoconferencing systems.

6.
3rd International and Interdisciplinary Conference on Image and Imagination, IMG 2021 ; 631 LNNS:799-808, 2023.
Article in English | Scopus | ID: covidwho-2291996

ABSTRACT

E-learning has proven to be an important resource, particularly in recent times, due to the limitations imposed by the SARS-CoV-2 pandemic. Several ways of delivering lessons through the Internet were used, but both instructors and students complained about the visual output, so an evaluation of the most effective techniques for creating video-based lessons is highly relevant. Seventy-eight students participated in 30 h of university online courses delivered through MS Teams, in which OBS (Open Broadcaster Software) Studio was used to create the lessons. The software allowed merging: a) MS PowerPoint slides, b) the instructor captured through a webcam, and c) pictures of background sceneries. After the end of the courses, students filled in a questionnaire evaluating pictures taken from different e-learning sceneries. The OBS-based scenery obtained the best evaluation in all measures (fruition, attention keeping and promotion of learning) and the highest rank when participants were asked to compare all the sceneries. These results confirm that students prefer reality-based sceneries, in which the most informative elements (the face, body and voice of the instructor, and the slides used for the lesson) are all present. Besides other obvious factors related to the quality of teaching, e-learning should also take visual features into account. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
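
For illustration only: the study composited its scenes in OBS Studio, but the same picture-in-picture layout (slide plus webcam inset) can be mocked up in a few lines of OpenCV; the file names and inset size below are hypothetical:

```python
import cv2

slide = cv2.imread("slide.png")            # an exported MS PowerPoint slide (hypothetical file)
cap = cv2.VideoCapture(0)                  # instructor webcam

ok, frame = cap.read()
if ok and slide is not None:
    h, w = slide.shape[:2]
    cam = cv2.resize(frame, (w // 4, h // 4))          # picture-in-picture size
    y0 = h - cam.shape[0] - 20                         # bottom-right corner, 20 px margin
    x0 = w - cam.shape[1] - 20
    slide[y0:y0 + cam.shape[0], x0:x0 + cam.shape[1]] = cam
    cv2.imwrite("composited_scene.png", slide)
cap.release()
```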

7.
Robotics ; 12(2):58, 2023.
Article in English | ProQuest Central | ID: covidwho-2302665

ABSTRACT

With the occurrence of pandemics, such as COVID-19, which lead to social isolation, there is a need for home rehabilitation procedures without the direct supervision of health professionals. The great difficulty of treatment at home is the cost of the conventional equipment and the need for specialized labor to operate it. Thus, this paper aimed to develop serious games to assist health professionals in the physiotherapy of patients with spinal pain for clinical and home applications. Serious games integrate serious aspects such as teaching, rehabilitation, and information with the playful and interactive elements of video games. Despite the positive indication and benefits of physiotherapy for cases of chronic spinal pain, the long treatment time, social isolation due to pandemics, and lack of motivation to use traditional methods are some of the main causes of therapeutic failure. Using Unity 3D (version 2019.4.24f1) software and a personal computer with a webcam, we developed aesthetically pleasing, smooth, and attractive games, while maintaining the essence of seriousness that is required for rehabilitation. The serious games, controlled using OpenPose (version v1.0.0alpha-1.5.0) software, were tested with a healthy volunteer. The findings demonstrated that the proposed games can be used as a playful tool to motivate patients during physiotherapy and to reduce cases of treatment abandonment, including at home.
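
A hedged sketch of one way game logic could consume OpenPose output: OpenPose writes per-frame JSON keypoint files, from which a trunk-lean angle can be derived to drive an exercise game. The BODY_25 keypoint indices are standard; the file path, the game hook, and the use of trunk lean as the control signal are assumptions, not the authors' design:

```python
import json
import math

def trunk_angle(openpose_json_path):
    """Estimate trunk lean (degrees from vertical) from one OpenPose JSON frame.
    Assumes the BODY_25 model, where keypoint 1 is the neck and 8 the mid-hip."""
    with open(openpose_json_path) as f:
        data = json.load(f)
    if not data["people"]:
        return None
    k = data["people"][0]["pose_keypoints_2d"]          # flat [x, y, conf, x, y, conf, ...]
    neck = (k[1 * 3], k[1 * 3 + 1])
    hip = (k[8 * 3], k[8 * 3 + 1])
    dx, dy = neck[0] - hip[0], neck[1] - hip[1]
    return abs(math.degrees(math.atan2(dx, -dy)))       # 0 degrees = upright

# A game loop (e.g. in Unity, via a socket or file watcher) could poll this value
# and move the player avatar whenever the angle crosses an exercise threshold.
```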

8.
4th International Conference on Circuits, Control, Communication and Computing, I4C 2022 ; : 511-514, 2022.
Article in English | Scopus | ID: covidwho-2274225

ABSTRACT

The study's goal is to create a detector that determines whether pedestrians or individuals in public gatherings are maintaining social distancing. Drone-shot videos, live webcam feeds, and photographs are all accepted as input. With no human intervention, dynamic detection on a live stream provides safety and simplifies the monitoring of social distance; the webcam input can come from an external webcam or a drone's camera. The YOLOv4 algorithm is used in the initial object detection phase to identify the various items in each frame. The recognized objects are narrowed down to humans, and the Euclidean distance between each pair of detected people is computed. This distance determines whether they are maintaining the minimum separation, which is depicted with a colored bounding box, red for unsafe and green for safe, along with an indication of the number of people at risk. © 2022 IEEE.
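
A minimal, library-free sketch of the distance check described above: given person bounding boxes from a YOLOv4 pass, centroids closer than a threshold are flagged for red boxes. The pixel threshold is an illustrative assumption (a deployed system would calibrate it to real-world distance):

```python
import math

def flag_violations(boxes, min_dist_px=150):
    """Flag pairs of detected people closer than a pixel threshold.
    `boxes` are (x, y, w, h) person detections, e.g. from a YOLOv4 pass."""
    centroids = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
    unsafe = set()
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < min_dist_px:
                unsafe.update({i, j})                   # draw these boxes in red
    return unsafe

# Example with three dummy detections: the first two are too close.
print(flag_violations([(100, 200, 60, 160), (180, 210, 60, 160), (600, 220, 60, 160)]))
```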

9.
19th IEEE India Council International Conference, INDICON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2267269

ABSTRACT

Pose estimation is a technique for identifying the joints of a human body in an image or video given as input to a computer. It can be performed using machine learning (ML) and deep learning techniques, and has lately been receiving a great deal of attention in the fields of human sensing and artificial intelligence. The main aim of pose estimation is to predict human poses by locating key points such as elbows, knees and wrists. In this work, we propose a model that uses MediaPipe, an ML framework, to obtain key point coordinates, and compares ML algorithms such as SVM, Gaussian Naive Bayes, Random Forest, Gradient Boost and K-Neighbours classifiers to predict yoga poses. Yoga is practised by people of all ages to address both physical and mental issues, thus improving overall quality of life, and since the rise of the COVID-19 pandemic the number of people practising yoga has only increased. In the model, the human joint coordinates obtained are used as features, and the combination with the best accuracy and F-score (MediaPipe + SVM) is chosen for the final work. The yoga poses used are Plank, Warrior 2, Downdog, Goddess, Tree and Cobra. At run time, a real-time video feed is obtained from the webcam of the user's system, and pose estimation and classification of the yoga pose are performed. Unlike most current systems, corrective suggestions for the yoga posture are also displayed in real time alongside the webcam view of the person performing yoga, together with other basic pose information. © 2022 IEEE.
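
An illustrative sketch, assuming the standard MediaPipe Pose and scikit-learn APIs rather than the authors' code, of how per-image pose landmarks can be flattened into the feature vectors that feed an SVM classifier; the dataset paths in the commented training step are hypothetical:

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC

mp_pose = mp.solutions.pose

def pose_features(image_path):
    """Return a flat vector of the 33 MediaPipe pose landmarks (x, y, z), or None."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    with mp_pose.Pose(static_image_mode=True) as pose:
        res = pose.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if not res.pose_landmarks:
        return None
    return np.array([[p.x, p.y, p.z] for p in res.pose_landmarks.landmark]).flatten()

# Hypothetical training step: `image_paths` and `labels` would come from a
# labelled yoga-pose dataset, then feed an SVM as in the paper.
# X = np.stack([pose_features(p) for p in image_paths])
# clf = SVC(kernel="rbf").fit(X, labels)
# print(clf.predict(pose_features("test_frame.jpg").reshape(1, -1)))
```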

10.
Information Technology and Tourism ; 2023.
Article in English | Scopus | ID: covidwho-2252341

ABSTRACT

Using the conceptual frameworks and theories of virtual tourism, telepresence and para-social interactions, this exploratory study investigates an innovative campaign employed by a nature-based wildlife tourism operator as a response to the COVID-19 lockdowns and travel restrictions of 2020/21. Insights are provided into a unique model of webcam livestreaming that is scheduled, hosted and interactive. Over 73,000 social media comments and 590 survey responses from webcam viewers were analysed and indicate that watching the livestream had positive impacts for tourism recovery and conservation action. Research findings suggest that interactive webcam travel can affect travel behaviour and conservation awareness and action in part through building and engaging online communities and supporting a sense of connection with nature. This study contributes new knowledge to the emerging research on webcam livestreaming in tourism. As a subset of virtual tourism, interactive webcam travel emerges as an alternative to more costly forms of virtual reality for industry practitioners and stakeholders to engage new and old audiences, especially in the context of tourism recovery initiatives after disasters and crises that prevent or limit physical visitation. © 2023, The Author(s).

11.
International Conference on 4th Industrial Revolution Based Technology and Practices, ICFIRTP 2022 ; : 262-267, 2022.
Article in English | Scopus | ID: covidwho-2280902

ABSTRACT

In computer systems, the keyboard has long been the most prominent input medium. Lately, however, people have been living through a global pandemic, afraid of contracting Coronavirus (Covid-19), and therefore avoid touching shared surfaces for fear of catching this contagious virus and its variants. To mitigate this issue, we present a webcam-based virtual keyboard interface for interacting with a computer system. The method is implemented in Python 3.9 using pre-built modules such as OpenCV, MediaPipe, PyVDA and Win32API. A key is selected by bringing the index finger and middle finger together over it, and virtual desktop switching is handled by PyVDA. The pyttsx3 library plays a sound whenever a key is pressed or a desktop switch is initiated, corresponding to the key pressed or the desktop switched to. No hardware beyond the webcam already available in the system is required. The approach is also useful for people who want to access the system even when their hands are dirty. © 2022 IEEE.
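
A small sketch of the key-selection step described above (not the published code): the index-fingertip position, taken from hand landmarks, is tested against on-screen key rectangles; the three-key layout and coordinates are made up for illustration:

```python
def key_under_finger(index_tip, keys):
    """Return the key whose on-screen rectangle contains the index fingertip.
    `index_tip` is (x, y) in pixels; `keys` maps a character to (x, y, w, h)."""
    x, y = index_tip
    for ch, (kx, ky, kw, kh) in keys.items():
        if kx <= x <= kx + kw and ky <= y <= ky + kh:
            return ch
    return None

# Hypothetical layout; in the full system the fingertip position comes from
# MediaPipe hand landmarks, and a key counts as "pressed" only when the index
# and middle fingertips move close together while over it.
layout = {"A": (50, 300, 80, 80), "B": (150, 300, 80, 80), "C": (250, 300, 80, 80)}
print(key_under_finger((170, 340), layout))    # -> "B"
```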

12.
IEEE Journal on Selected Areas in Communications ; 41(1):107-118, 2023.
Article in English | Scopus | ID: covidwho-2245641

ABSTRACT

Video represents the majority of internet traffic today, driving a continual race between the generation of higher quality content, transmission of larger file sizes, and the development of network infrastructure. In addition, the recent COVID-19 pandemic fueled a surge in the use of video conferencing tools. Since videos take up considerable bandwidth (∼100 Kbps to a few Mbps), improved video compression can have a substantial impact on network performance for live and pre-recorded content, providing broader access to multimedia content worldwide. We present a novel video compression pipeline, called Txt2Vid, which dramatically reduces data transmission rates by compressing webcam videos ('talking-head videos') to a text transcript. The text is transmitted and decoded into a realistic reconstruction of the original video using recent advances in deep learning based voice cloning and lip syncing models. Our generative pipeline achieves two to three orders of magnitude reduction in the bitrate as compared to the standard audio-video codecs (encoders-decoders), while maintaining equivalent Quality-of-Experience based on a subjective evaluation by users (n = 242) in an online study. The Txt2Vid framework opens up the potential for creating novel applications such as enabling audio-video communication during poor internet connectivity, or in remote terrains with limited bandwidth. The code for this work is available at https://github.com/tpulkit/txt2vid.git. © 1983-2012 IEEE.
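
A back-of-the-envelope check, with assumed speaking-rate and codec figures, of why transmitting a transcript instead of video is consistent with the reported two-to-three-orders-of-magnitude bitrate reduction:

```python
# Rough arithmetic only: the speaking rate and codec bitrates are assumptions.
words_per_min = 150
chars_per_word = 6                                     # including the trailing space
text_bps = words_per_min * chars_per_word * 8 / 60     # ~120 bits/s of raw transcript

for codec_kbps in (100, 1000):                         # typical low/high video-call bitrates
    print(f"{codec_kbps} kbps video vs transcript: "
          f"{codec_kbps * 1000 / text_bps:.0f}x reduction")
```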

13.
IEEE Journal on Selected Areas in Communications ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2152491

ABSTRACT

Video represents the majority of internet traffic today, driving a continual race between the generation of higher quality content, transmission of larger file sizes, and the development of network infrastructure. In addition, the recent COVID-19 pandemic fueled a surge in the use of video conferencing tools. Since videos take up considerable bandwidth (~100 Kbps to a few Mbps), improved video compression can have a substantial impact on network performance for live and pre-recorded content, providing broader access to multimedia content worldwide. We present a novel video compression pipeline, called Txt2Vid, which dramatically reduces data transmission rates by compressing webcam videos (“talking-head videos”) to a text transcript. The text is transmitted and decoded into a realistic reconstruction of the original video using recent advances in deep learning based voice cloning and lip syncing models. Our generative pipeline achieves two to three orders of magnitude reduction in the bitrate as compared to the standard audio-video codecs (encoders-decoders), while maintaining equivalent Quality-of-Experience based on a subjective evaluation by users (n = 242) in an online study. The Txt2Vid framework opens up the potential for creating novel applications such as enabling audio-video communication during poor internet connectivity, or in remote terrains with limited bandwidth. The code for this work is available at https://github.com/tpulkit/txt2vid.git. IEEE

14.
2nd Asian Conference on Innovation in Technology, ASIANCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136099

ABSTRACT

The COVID-19 pandemic has dramatically changed the education sector, leading to a rise in online learning. E-learning has increased exponentially, and it has become difficult to conduct exams offline. The need for online education and online examinations has led institutes and educational sectors to develop platforms where exams can be proctored, and the ability to proctor online examinations effectively has become a crucial factor in the education sector. Presently, human proctoring, in which a supervisor monitors candidates during exams via webcam, is the most common approach, but it has become challenging for supervisors and teachers to run online exams this way, and studies report a significant rise in cheating during online exams. The main objective of the project is to enable honest, trustworthy examinations by developing an AI-based online proctoring system that helps teachers verify an examinee's identity, flag suspicious behaviour during the exam, and keep track of students throughout the exam. Conducting smooth and honest examinations will also increase trust in the online education sector. The proposed project is a website that uses AI algorithms such as CNNs/RNNs to monitor students during the exam, allowing students to take the exam from any location while detecting malicious activities via webcam and guaranteeing fair evaluation. Vision-based tracking includes eyeball tracking, lip-movement tracking, detection of additional people in the frame, and more; the website also ensures that the candidate is sharing his or her screen. In conclusion, the proposed project provides an online platform with features such as viewing a student's screen at any time, sending warning messages, terminating an exam, and eye and lip tracking for conducting effective exams. © 2022 IEEE.
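
A minimal sketch of one of the vision-based checks listed above, additional-person detection, using OpenCV's stock Haar face cascade rather than the CNN/RNN models the project proposes; the detector parameters are assumptions:

```python
import cv2

# Flag frames where zero or more than one face is visible to the proctoring webcam.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        print("warning: candidate not visible")
    elif len(faces) > 1:
        print("warning: additional person detected in frame")
cap.release()
```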

15.
Journal of Computer Assisted Learning ; 2022.
Article in English | Web of Science | ID: covidwho-2108072

ABSTRACT

Background: The existing literature has predominantly focused on instructor social presence in videos in an asynchronous learning environment, and little is known about student social presence on webcam in online learning in the context of COVID-19. Objectives: This paper therefore contrasts students' and teachers' perspectives on student social presence on webcam in synchronous online teaching through co-orientation analysis. Methods: Data were collected through an online questionnaire with 14 statements that measured participants' perceptions of webcam use across three constructs of social presence (i.e., emotional expression, open communication, and cohesion). 154 students and 36 teachers from two higher education institutions in Hong Kong responded to the questionnaire, and their responses were analysed using the co-orientation model. Results and conclusion: Results reveal the perceptual gaps between teachers and students on the use of webcams to promote student social presence, showing how teachers were comparatively more positive about the impacts on learning and consistently overestimated students' preference for webcam use. Through analysing individual constructs/items, this paper argues that using webcams in synchronous online learning could enhance student social presence only to a limited extent, in that it may help improve emotional expression and open communication but not cohesion. Implications: The paper advises against the adoption of a clear-cut policy that webcams should be either recommended or not recommended for online learning. Instead, teachers should take into account students' perspectives to find out the types of activities that are apt for webcam use in online learning; reflective tasks and oral assessments were amongst the ones considered appropriate by students in the study.

16.
1st International Conference on Intelligent Controller and Computing for Smart Power, ICICCSP 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2051999

ABSTRACT

For corporate and private organizations, providing security and secure access to workplaces has long been a top priority. Over the years, the way security is delivered has advanced from keypads to fingerprint sensors, but even these have flaws and weaknesses. Computer vision is a more powerful and modern technique that can be integrated into a security system to increase the overall level of security. This project aims to create a security system that combines computer-vision-based facial authentication with a temperature-sensing module to enable secure, monitored, contact-less access. Facial authentication is achieved with the help of a webcam connected to the system and a Python program; the main control is then transferred to an Arduino UNO microcontroller board, which tests the two incoming inputs and grants access based on its decision. A training model studies the given images of the users and recognizes them when entry is requested. © 2022 IEEE.
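
A hedged sketch of the hand-off between the recognition program and the microcontroller that the abstract outlines, assuming the pyserial library; the serial port, baud rate, and one-byte protocol are hypothetical:

```python
import serial   # pyserial

# Stand-in for the recognizer's decision; in the full system this would come
# from the face-matching step on the webcam frames.
FACE_MATCHED = True

if FACE_MATCHED:
    # Once the face checks out, a single byte on the serial link asks the
    # Arduino UNO (which also reads the temperature module) to grant access.
    with serial.Serial("COM3", 9600, timeout=1) as link:
        link.write(b"1")
        reply = link.readline()        # the board's sketch reports its decision
        print("board replied:", reply)
```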

17.
Interactive Learning Environments ; : 1-14, 2022.
Article in English | Academic Search Complete | ID: covidwho-2050925

ABSTRACT

One of the phenomena reported by lecturers who switched to online distance learning during COVID-19 is students' refusal to turn on their cameras during online classes. This study aimed to examine the factors that predict camera use in class, considering three types of predictors: resistance factors, learning environment factors, and personal factors. The population included 205 students from higher education institutions in Israel who studied online during the COVID-19 period. Data were collected using an online questionnaire and analyzed using quantitative and qualitative methods. The findings show that camera use among students during academic classes is indeed relatively low and only partial. The study also revealed four resistance factors to turning on cameras, as well as personal characteristics, such as gender and self-image, that predict students' rate of turning on cameras. However, the more the lecturers asked students to turn on cameras, the higher the students' responsiveness, and the smaller the class, the greater the willingness to turn on cameras. Finally, the findings may help lecturers better understand students' perspectives on camera use in online classes and develop effective strategies to increase camera use by students. Copyright of Interactive Learning Environments is the property of Routledge.

18.
Telehealth and Medicine Today ; 6(4), 2021.
Article in English | ProQuest Central | ID: covidwho-2026487

ABSTRACT

As telehealth is increasingly adopted across all care settings, it is important to understand how clinicians can adapt and respond to patient needs. Drawing on the experiences of a virtual primary care physician and a patient advocate, this Perspectives editorial provides insights beyond the telehealth basics for establishing digital empathy and a remote therapeutic connection.

19.
180th Meeting of the Acoustical Society of America, ASA 2021 ; 43, 2021.
Article in English | Scopus | ID: covidwho-2019671

ABSTRACT

The COVID-19 pandemic forced my teaching and all interactions with my students to be conducted entirely online via Zoom from March 2020 through May 2021. Reflecting on this experience, I have been surprised to realize that there are several aspects of teaching online over Zoom which I will miss when I return to the classroom. In this paper I describe my teach-from-home studio, which enabled me to maximize online interaction with my students, and how I was able to bring some much-needed humor into my online classes using Zoom virtual backgrounds and costumes, and later a small art mannequin and a dedicated webcam. In addition, I discuss some ways I was able to encourage students to interact with each other and with me. A surprising observation was an increased level of engagement between myself and my online students, especially the distance education students with whom I normally have little interaction. There were also some things that did not work over Zoom, such as the elaborate classroom demonstrations which I normally use on a regular basis. This paper concludes with lessons learned, along with things I hope to retain and/or change when I return to teaching in person in a classroom. This paper was presented at the 180th meeting of the Acoustical Society of America, June 7-9, 2021. This meeting, nicknamed "Acoustics in Focus", was a virtual, online meeting that took place during the COVID-19 pandemic. This paper was part of an Education in Acoustics special session, Reflections on Teaching Acoustics During a Pandemic. © 2021 Acoustical Society of America.

20.
3rd International Conference for Emerging Technology, INCET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018891

ABSTRACT

In the Covid-19 age, we have become increasingly reliant on virtual interactions such as Zoom and Google Meet or Microsoft Teams meetings. The video received from live webcams during these interactions has become a rich source for researchers seeking to understand human emotions. Owing to its numerous applications in human-computer interaction (HCI), the analysis of emotion from facial expressions has attracted considerable recent research interest. The primary objective of this study is to assess various emotions from facial expressions captured via a live web camera. Traditional (conventional FER) approaches rely on manual feature extraction before classifying the emotional state, whereas deep learning, convolutional neural networks and transfer learning are now widely used for emotion classification due to their advanced feature extraction from images. In this implementation, we use the deep learning models MTCNN and VGG-16 to extract features and classify seven distinct emotions from facial landmarks in live video. Using the standard FER2013 dataset, we achieved a maximum accuracy of 97.23 percent in training and 60.2 percent in validation for emotion classification. © 2022 IEEE.
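
For illustration, a sketch of the detection-then-classification flow the abstract describes, using the open-source mtcnn package for face detection; the fine-tuned VGG-16 emotion head is not included here, so the classification step is left as a commented placeholder:

```python
import cv2
from mtcnn import MTCNN

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

detector = MTCNN()
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for det in detector.detect_faces(rgb):
        x, y, w, h = det["box"]
        x, y = max(x, 0), max(y, 0)                     # MTCNN boxes can go negative
        face = cv2.resize(rgb[y:y + h, x:x + w], (224, 224))
        print("face at", det["box"])
        # A VGG-16 head fine-tuned on FER2013 (not included here) would now
        # classify `face` into one of the seven EMOTIONS, e.g.:
        # probs = emotion_model.predict(face[None] / 255.0)
        # print(EMOTIONS[int(probs.argmax())])
cap.release()
```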
